The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
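To make two of the reported practices concrete, the sketch below combines k-fold cross-validation on a training set with ensembling of the resulting identical models; the dataset, classifier, and fold count are illustrative placeholders rather than choices reported in the survey.

```python
# Minimal sketch of k-fold cross-validation with ensembling of identical models;
# dataset and model choices are stand-ins for an imaging pipeline.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = load_breast_cancer(return_X_y=True)  # placeholder for an imaging dataset
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

fold_models = []
for train_idx, val_idx in kfold.split(X):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    print("fold accuracy:", model.score(X[val_idx], y[val_idx]))
    fold_models.append(model)

# Ensemble of the k identical models: average their predicted probabilities.
probs = np.mean([m.predict_proba(X) for m in fold_models], axis=0)
ensemble_pred = probs.argmax(axis=1)
```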
In contrastive self-supervised learning, the common way to learn discriminative representations is to pull different augmented "views" of the same image closer while pushing all other images further apart, which has been proven to be effective. However, it is unavoidable to construct undesirable views containing different semantic concepts during the augmentation procedure. Indiscriminately pulling these augmentations closer in the feature space would then damage the semantic consistency of the representation. In this study, we introduce feature-level augmentation and propose a novel semantics-consistent feature search (SCFS) method to mitigate this negative effect. The main idea of SCFS is to adaptively search for semantics-consistent features to enhance the contrast between semantics-consistent regions in different augmentations. Thus, the trained model can learn to focus on meaningful object regions, improving the semantic representation ability. Extensive experiments conducted on different datasets and tasks demonstrate that SCFS effectively improves the performance of self-supervised learning and achieves state-of-the-art performance on different downstream tasks.
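For context, the standard view-level contrast the abstract describes (and which SCFS refines) can be sketched as an InfoNCE-style loss; the embedding size, batch size, and temperature below are illustrative, and this is not the SCFS feature search itself.

```python
# Minimal sketch of the view-level contrastive objective: two augmented views of the
# same image are pulled together, all other images in the batch are pushed apart.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (N, N) pairwise similarity
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```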
Spatio-temporal machine learning is critically needed for a variety of societal applications, such as agricultural monitoring, hydrological forecast, and traffic management. These applications greatly rely on regional features that characterize spatial and temporal differences. However, spatio-temporal data are often complex and pose several unique challenges for machine learning models: 1) multiple models are needed to handle region-based data patterns that have significant spatial heterogeneity across different locations; 2) local models trained on region-specific data have limited ability to adapt to other regions that have large diversity and abnormality; 3) spatial and temporal variations entangle data complexity that requires more robust and adaptive models; 4) limited spatial-temporal data in real scenarios (e.g., crop yield data is collected only once a year) makes the problems intrinsically challenging. To bridge these gaps, we propose task-adaptive formulations and a model-agnostic meta-learning framework that ensembles regionally heterogeneous data into location-sensitive meta tasks. We conduct task adaptation following an easy-to-hard task hierarchy in which different meta models are adapted to tasks of different difficulty levels. One major advantage of our proposed method is that it improves the model adaptation to a large number of heterogeneous tasks. It also enhances the model generalization by automatically adapting the meta model of the corresponding difficulty level to any new tasks. We demonstrate the superiority of our proposed framework over a diverse set of baselines and state-of-the-art meta-learning frameworks. Our extensive experiments on real crop yield data show the effectiveness of the proposed method in handling spatial-related heterogeneous tasks in real societal applications.
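As a hedged illustration of the model-agnostic meta-learning backbone that such a framework builds on, the sketch below runs a MAML-style inner/outer loop over synthetic region-specific regression tasks; the task sampler, linear model, and step sizes are placeholders, not the paper's formulation.

```python
# MAML-style inner/outer loop over synthetic "regional" regression tasks.
import torch

w = torch.zeros(10, 1, requires_grad=True)           # meta parameters of a linear model
meta_opt = torch.optim.Adam([w], lr=1e-3)
inner_lr = 0.01

def sample_tasks(n_tasks=4, n=16):
    # placeholder: each task is (support_x, support_y, query_x, query_y) for one region
    tasks = []
    for _ in range(n_tasks):
        a = torch.randn(10, 1)                        # region-specific ground-truth weights
        xs, xq = torch.randn(n, 10), torch.randn(n, 10)
        tasks.append((xs, xs @ a, xq, xq @ a))
    return tasks

for step in range(200):
    meta_loss = 0.0
    for xs, ys, xq, yq in sample_tasks():
        inner_loss = ((xs @ w - ys) ** 2).mean()
        (grad,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w_task = w - inner_lr * grad                  # one inner adaptation step per task
        meta_loss = meta_loss + ((xq @ w_task - yq) ** 2).mean()
    meta_opt.zero_grad()
    meta_loss.backward()                              # outer update of the meta parameters
    meta_opt.step()
```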
Single-cell technologies are revolutionizing the entire field of biology. The large volumes of data generated by single-cell technologies are high-dimensional, sparse, heterogeneous, and have complicated dependency structures, making analyses using conventional machine learning approaches challenging and impractical. In tackling these challenges, deep learning often demonstrates superior performance compared to traditional machine learning methods. In this work, we give a comprehensive survey on deep learning in single-cell analysis. We first introduce background on single-cell technologies and their development, as well as fundamental concepts of deep learning including the most popular deep architectures. We present an overview of the single-cell analytic pipeline pursued in research applications while noting divergences due to data sources or specific applications. We then review seven popular tasks spanning through different stages of the single-cell analysis pipeline, including multimodal integration, imputation, clustering, spatial domain identification, cell-type deconvolution, cell segmentation, and cell-type annotation. Under each task, we describe the most recent developments in classical and deep learning methods and discuss their advantages and disadvantages. Deep learning tools and benchmark datasets are also summarized for each task. Finally, we discuss the future directions and the most recent challenges. This survey will serve as a reference for biologists and computer scientists, encouraging collaborations.
Learning robust feature matching between the template and the search area is crucial for 3D Siamese tracking. The core of Siamese feature matching is how to assign high feature similarity to the corresponding points between the template and the search area for precise object localization. In this paper, we propose a novel point cloud registration-driven Siamese tracking framework, with the intuition that spatially aligned corresponding points (via 3D registration) tend to yield consistent feature representations. Specifically, our method consists of two modules: a tracking-specific non-local registration module and a registration-aided Sinkhorn template-feature aggregation module. The registration module targets precise spatial alignment between the template and the search area. A tracking-specific spatial distance constraint is proposed to refine the cross-attention weights in the non-local module for discriminative feature learning. We then use weighted SVD to compute the rigid transformation between the template and the search area and align them to obtain the desired spatially aligned corresponding points. For the feature aggregation module, we formulate the feature matching between the transformed template and the search area as an optimal transport problem and leverage Sinkhorn optimization to search for an outlier-robust matching solution. In addition, a registration-aided spatial distance map is built to improve matching robustness in indistinguishable regions (e.g., smooth surfaces). Finally, guided by the obtained feature matching map, we aggregate target information from the template into the search area to construct target-specific features, which are then fed into a CenterPoint-like detection head for object localization. Extensive experiments on the KITTI, NuScenes, and Waymo datasets verify the effectiveness of our proposed method.
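A minimal sketch of the weighted-SVD step mentioned above (a weighted Kabsch alignment) might look as follows; the point sets and weights here are random placeholders rather than actual tracking correspondences.

```python
# Weighted Kabsch/SVD: estimate the rigid transform aligning template points to
# (soft-)corresponding search-area points.
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """src, dst: (N, 3) corresponding points; w: (N,) non-negative correspondence weights."""
    w = w / w.sum()
    mu_src = (w[:, None] * src).sum(0)
    mu_dst = (w[:, None] * dst).sum(0)
    H = (src - mu_src).T @ (w[:, None] * (dst - mu_dst))     # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_dst - R @ mu_src
    return R, t                                              # dst ≈ (R @ src.T).T + t

R, t = weighted_rigid_transform(np.random.rand(50, 3), np.random.rand(50, 3), np.ones(50))
```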
Combining multiple sensors enables a robot to maximize its perceptual awareness of the environment and enhances its robustness to external disturbances, which is crucial for robot navigation. This paper proposes the FusionPortable benchmark, a complete multi-sensor dataset with a diverse set of mobile robot sequences. The paper presents three contributions. First, we advance a portable and versatile multi-sensor suite that provides rich sensory measurements: 10 Hz LiDAR point clouds, 20 Hz stereo frame images, high-rate and asynchronous events from stereo event cameras, 200 Hz inertial readings from an IMU, and 10 Hz GPS signals. The sensors are temporally synchronized in hardware. The device is lightweight, self-contained, and offers plug-and-play support for mobile robots. Second, we build the dataset by collecting 17 sequences that cover a variety of campus environments, utilizing multiple robot platforms for data collection. Some sequences are challenging for existing SLAM algorithms. Third, we provide ground truth for evaluating localization and mapping performance. We also evaluate state-of-the-art SLAM methods and identify their limitations. The dataset, consisting of raw sensor measurements, ground truth, calibration data, and evaluation algorithms, will be released at: https://ram-lab.com/file/site/site/multi-sensor-dataset.
An origin-destination (OD) matrix records directional flow data between pairs of OD regions. The complex spatio-temporal dependencies in the matrix make the OD matrix forecasting (ODMF) problem not only intractable but also non-trivial. However, most related methods are designed to forecast very short time series in one specific application scenario, and thus cannot meet the requirements of practical applications that differ in both scenario and forecasting length. To address these issues, we propose a Transformer-like model named ODformer with two salient characteristics: (i) a novel OD attention mechanism, which captures the special spatial dependencies between OD pairs sharing the same origin (destination) and, combined with a 2D-GCN that captures spatial dependencies between OD regions, greatly improves the model's ability to predict across application scenarios; and (ii) a periodic self-attention mechanism that effectively forecasts long sequences of OD matrices while adapting to the periodicity differences of different scenarios. Extensive experiments in three application settings (i.e., transportation traffic, IP backbone network traffic, and crowd flow) show that our method outperforms state-of-the-art methods.
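The exact ODformer layer is not specified here, but as a hedged approximation of the OD attention idea, a generic scaled dot-product attention applied among destinations that share the same origin row of an OD feature tensor could be sketched as:

```python
# Scaled dot-product attention applied along each origin row of an OD feature tensor;
# this is a generic illustration, not the exact ODformer layer.
import torch
import torch.nn.functional as F

def row_attention(od, Wq, Wk, Wv):
    """od: (R, R, D) features for each origin-destination pair with D channels."""
    q, k, v = od @ Wq, od @ Wk, od @ Wv                       # project pair features
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5      # (R, R, R), one map per origin
    return F.softmax(scores, dim=-1) @ v                      # mix destinations within each origin

R, D = 8, 16
out = row_attention(torch.randn(R, R, D), *(torch.randn(D, D) for _ in range(3)))
```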
Unsupervised domain adaptation for point cloud semantic segmentation has attracted great attention due to its effectiveness in learning from unlabeled data. Most existing methods use global-level feature alignment to transfer knowledge from the source domain to the target domain, which may cause semantic ambiguity in the feature space. In this paper, we propose a graph-based framework to explore local-level feature alignment between the two domains, which can preserve semantic discrimination during adaptation. Specifically, to extract local-level features, we first dynamically construct local feature graphs on both domains and build a memory bank with the graphs from the source domain. In particular, we use optimal transport to generate graph matching pairs. Then, based on the assignment matrix, we align the feature distributions between the two domains with a graph-based local feature loss. Furthermore, we consider the correlations between the features of different categories and formulate a category-guided contrastive loss to guide the segmentation model to learn discriminative features on the target domain. Extensive experiments on different synthetic-to-real and real-to-real domain adaptation scenarios demonstrate that our method achieves state-of-the-art performance.
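A hedged sketch of the optimal-transport step, using entropic regularization (Sinkhorn iterations) to produce an assignment matrix between source and target local-feature graphs, is given below; the graph descriptors and the squared-Euclidean cost are simple placeholders chosen for illustration.

```python
# Entropic optimal transport (Sinkhorn) producing a soft assignment between
# source and target graph descriptors, then hard matching pairs from the plan.
import numpy as np

def sinkhorn(cost, eps=0.1, n_iter=100):
    """cost: (m, n) pairwise cost; returns an (m, n) soft assignment (transport plan)."""
    m, n = cost.shape
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)   # uniform marginals
    K = np.exp(-cost / eps)
    v = np.ones(n)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

src = np.random.rand(32, 64)                          # source-domain graph descriptors
tgt = np.random.rand(40, 64)                          # target-domain graph descriptors
cost = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(cost)
pairs = plan.argmax(axis=1)                           # matching pair for each source graph
```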
Online and offline handwritten Chinese text recognition (HCTR) have been studied for decades. Early methods adopted over-segmentation-based strategies but suffered from low speed, insufficient accuracy, and the high cost of character segmentation annotations. Recently, segmentation-free methods based on connectionist temporal classification (CTC) and attention mechanisms have dominated the field of HCTR. However, people actually read text character by character, especially for ideograms such as Chinese. This raises the question: is the segmentation-free strategy really the best solution for HCTR? To explore this issue, we propose a new segmentation-based method for recognizing handwritten Chinese text, implemented with a simple yet effective fully convolutional network. A novel weakly supervised learning method is proposed so that the network can be trained using only transcript annotations; the expensive character segmentation annotations required by previous segmentation-based methods can thus be avoided. Owing to the lack of context modeling in fully convolutional networks, we propose a contextual regularization method to integrate contextual information into the network during the training stage, which can further improve recognition performance. Extensive experiments on four widely used benchmarks, namely CASIA-HWDB, CASIA-OLHWDB, ICDAR 2013, and SCUT-HCCDoc, show that our method significantly surpasses existing methods on both online and offline HCTR, and achieves a much higher inference speed than CTC/attention-based methods.
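As a toy illustration (not the paper's architecture), the sketch below shows how a fully convolutional network can map a text-line image to per-column character logits, the kind of single-pass localize-and-classify output a segmentation-based recognizer builds on; the layer sizes and class count are assumptions.

```python
# Toy fully convolutional recognizer: per-column character logits over a text-line image.
import torch
import torch.nn as nn

num_classes = 3755 + 1                                # e.g. GB2312 level-1 characters + background
fcn = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                  # downsample height and width
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((1, None)),                  # collapse height, keep horizontal axis
    nn.Conv2d(128, num_classes, 1),                   # per-column class logits
)

line_image = torch.randn(1, 1, 64, 512)               # (batch, channel, height, width)
logits = fcn(line_image).squeeze(2)                   # (1, num_classes, W/2)
prediction = logits.argmax(dim=1)                     # per-column character hypothesis
```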
Contrastive learning has shown great promise in the field of graph representation learning. By manually constructing positive/negative samples, most graph contrastive learning methods rely on vector inner-product-based similarity metrics to distinguish samples of graph representations. However, handcrafted sample construction (e.g., perturbing the nodes or edges of a graph) may not effectively capture the intrinsic local structures of the graph. Likewise, inner-product-based similarity metrics cannot fully exploit the local structure of the graph to characterize differences between graphs. To this end, in this paper, we propose a novel contrastive learning framework based on adaptive subgraph generation for effective and robust self-supervised graph representation learning, in which optimal transport distances are used as the similarity metric between subgraphs. The framework aims to generate contrastive samples by capturing the intrinsic structure of the graph and to distinguish samples based on both the features and the structures of subgraphs. Specifically, for each center node, we first develop a network that generates interpolated subgraphs by adaptively learning relation weights to the nodes of the corresponding neighborhood. We then construct positive and negative pairs of subgraphs from the same and different nodes, respectively. Finally, we employ two types of optimal transport distances (i.e., the Wasserstein distance and the Gromov-Wasserstein distance) to construct the structured contrastive loss. Extensive node classification experiments on benchmark datasets verify the effectiveness of our graph contrastive learning method.
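A hedged sketch of computing the two optimal transport distances between a pair of subgraphs, assuming the POT (Python Optimal Transport) library, follows; the node features, structure matrices, and uniform node weights are random placeholders rather than the paper's learned subgraphs.

```python
# Wasserstein (feature-level) and Gromov-Wasserstein (structure-level) distances
# between two subgraphs, as ingredients for a structured contrastive loss.
import numpy as np
import ot

n1, n2, d = 10, 12, 16
f1, f2 = np.random.rand(n1, d), np.random.rand(n2, d)        # node features of two subgraphs
C1 = np.linalg.norm(f1[:, None] - f1[None, :], axis=-1)      # intra-subgraph structure matrices
C2 = np.linalg.norm(f2[:, None] - f2[None, :], axis=-1)
p, q = np.full(n1, 1 / n1), np.full(n2, 1 / n2)              # uniform node weights

M = ot.dist(f1, f2)                                          # cross-subgraph feature cost
w_dist = ot.emd2(p, q, M)                                    # Wasserstein distance (features)
gw_dist = ot.gromov.gromov_wasserstein2(C1, C2, p, q, 'square_loss')  # structure distance
```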